Results 1 - 20 of 116
1.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471171

ABSTRACT

Objective. The aim of this study was to reconstruct volumetric computed tomography (CT) images in real time from ultra-sparse two-dimensional x-ray projections, facilitating easier navigation and positioning during image-guided radiation therapy. Approach. Our approach leverages a voxel-space-searching Transformer model to overcome the limitations of conventional CT reconstruction techniques, which require extensive x-ray projections and lead to high radiation doses and equipment constraints. Main results. The proposed XTransCT algorithm demonstrated superior image quality, structural accuracy, and generalizability across different datasets, including a hospital set of 50 patients, the large-scale public LIDC-IDRI dataset, and the LNDb dataset used for cross-validation. Notably, the algorithm achieved an approximately 300% improvement in reconstruction speed, at 44 ms per 3D image compared with earlier 3D convolution-based methods. Significance. The XTransCT architecture has the potential to impact clinical practice by providing high-quality CT images faster and with substantially reduced radiation exposure for patients. The model's generalizability suggests it could be applied in a variety of healthcare settings.
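The inverse problem XTransCT tackles can be pictured with a toy forward model: each 2D x-ray projection is, to first order, a set of line integrals through the volume, and the network learns to invert one or two such views back to 3D. A minimal numpy sketch of the ultra-sparse input (illustrative only; the parallel-beam geometry and the helper name `orthogonal_projections` are assumptions, not the authors' code):

```python
import numpy as np

def orthogonal_projections(volume):
    """Simulate two ultra-sparse inputs: line-integral (parallel-beam)
    projections of a 3D volume along two orthogonal axes."""
    ap = volume.sum(axis=0)   # anterior-posterior view
    lat = volume.sum(axis=1)  # lateral view
    return ap, lat

# toy 3D "patient" volume with a bright cubic "lesion"
vol = np.zeros((8, 8, 8))
vol[2:4, 3:5, 3:5] = 1.0
ap, lat = orthogonal_projections(vol)
```

Recovering `vol` from `ap` and `lat` alone is massively under-determined, which is why a learned prior such as a Transformer is needed.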


Subjects
Image-Guided Radiotherapy, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, X-Rays, Cone-Beam Computed Tomography/methods, Three-Dimensional Imaging, Algorithms, Computer-Assisted Image Processing/methods, Imaging Phantoms
2.
Biomed Phys Eng Express ; 10(3)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38498928

ABSTRACT

Objective. Low-coupling, seamless integration of multiple systems is the core foundation of smart radiotherapy. Following the Service-Oriented Architecture style, a set of named operations (Eclipse Web Service API, EWSAPI) was developed to enable network calls to Eclipse. Approach. Under the guidance of Vertical Slice Architecture, EWSAPI was implemented in C# on ASP.NET Core 6.0. Each operation consists of three components: Request, Endpoint and Response. Depending on its function, the data exchanged by each operation, as input or output parameters, is either empty or a predefined JSON document. These operations were realized and enriched gradually, layer by layer, following the clinical business classification. The business logic of each operation was developed and maintained independently. Where the Eclipse Scripting API (ESAPI) was required, its constraints were followed. Main results. Selected features of the Eclipse TPS were encapsulated as standard web services that other software can invoke over the network. Several data quality control and planning processes were encapsulated into interfaces, thereby extending the functionality of Eclipse. Currently, EWSAPI covers testing of service interfaces, quality control of radiotherapy data, automation tasks for plan design, and transmission of DICOM RT files. All interfaces support asynchronous invocation. A separate Eclipse context is created for each invocation and released at the end. Significance. EWSAPI, a set of standard web services for calling Eclipse features over the network, is flexible and extensible. It is an efficient way to integrate Eclipse with other systems and will be gradually enriched as clinical applications deepen.


Subjects
Computer-Assisted Radiotherapy Planning, Intensity-Modulated Radiotherapy, Computer-Assisted Radiotherapy Planning/methods, Radiotherapy Dosage, Software, Intensity-Modulated Radiotherapy/methods, Quality Control
3.
Comput Med Imaging Graph ; 112: 102336, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244280

ABSTRACT

Rigid pre-registration involving local-global matching or other large-deformation scenarios is crucial. Current popular methods rely on unsupervised learning based on grayscale similarity, but when different poses lead to varying tissue structures, or when image quality is poor, these methods tend to be unstable and inaccurate. In this study, we propose a novel method for medical image registration based on matching arbitrary voxel points of interest, called the query point quizzer (QUIZ). QUIZ focuses on the correspondence between local and global matching points, employing a CNN for feature extraction and a Transformer architecture for global point-matching queries, followed by an average-displacement rigid transformation of the local image. We validated this approach on a large-deformation dataset of cervical cancer patients, with results indicating substantially smaller deviations than state-of-the-art methods. Remarkably, even for cross-modality subjects it surpasses the current state of the art.
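The final step described above, a rigid transformation derived from the average displacement over matched points, can be sketched as follows (a translation-only toy, not the QUIZ implementation; all names are illustrative):

```python
import numpy as np

def rigid_translation_from_matches(src_pts, dst_pts):
    """Estimate a translation-only rigid transform as the average
    displacement over matched 3D point pairs."""
    disp = dst_pts - src_pts   # per-pair displacement vectors
    return disp.mean(axis=0)   # average displacement

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
dst = src + np.array([3., -1., 2.])   # all points shifted identically
t = rigid_translation_from_matches(src, dst)
moved = src + t
```

Averaging makes the estimate robust to zero-mean noise in individual matches; a full rigid transform would also estimate rotation (e.g., via Procrustes analysis).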


Subjects
Algorithms, Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods
4.
IEEE Trans Med Imaging ; PP2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38194400

ABSTRACT

During computed tomography (CT), metallic implants often cause disruptive artifacts in the reconstructed images, impeding accurate diagnosis. Many supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR). However, these methods rely heavily on training with paired simulated data, which are challenging to acquire, so their performance can degrade in clinical practice. Existing unsupervised MAR methods, whether learning-based or not, typically operate in a single domain, either the image domain or the sinogram domain. In this paper, we propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions. Specifically, we first train a diffusion model on CT images without metal artifacts. Subsequently, we iteratively introduce the diffusion priors in both the sinogram and image domains to restore the portions degraded by metal artifacts. In addition, we design temporally dynamic weight masks for the image-domain fusion. The dual-domain processing enables our approach to outperform existing unsupervised MAR methods, including another diffusion-model-based MAR method, as validated qualitatively and quantitatively on synthetic datasets. Moreover, our method demonstrates superior visual results compared with both supervised and unsupervised methods on clinical datasets. Code is available at github.com/DeepXuan/DuDoDp-MAR.
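The dual-domain restoration hinges on blending diffusion-prior content into degraded regions under a time-varying mask. A toy numpy sketch of one such fusion step (the linear weight schedule and all names are assumptions, not the paper's actual design):

```python
import numpy as np

def fuse_with_mask(observed, prior, metal_mask, t, t_max):
    """Blend a diffusion-model prior into metal-corrupted regions.
    The mask weight grows with diffusion time t (hypothetical linear
    schedule), so the prior gradually replaces the degraded content."""
    w = metal_mask * (t / t_max)          # temporally dynamic weight
    return w * prior + (1.0 - w) * observed

obs = np.full((4, 4), 100.0)              # degraded image (HU-like)
pri = np.full((4, 4), 40.0)               # diffusion-prior estimate
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                      # metal-affected region
out = fuse_with_mask(obs, pri, mask, t=10, t_max=10)
```

Outside the mask the observed data pass through unchanged, which is what preserves trustworthy anatomy while the prior repairs the artifact region.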

5.
Acta Radiol ; 65(1): 41-48, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37071506

ABSTRACT

BACKGROUND: Computed tomography (CT) and magnetic resonance imaging (MRI) are both indicated for preoperative planning, which may complicate diagnosis and place a burden on patients with lumbar disc herniation. PURPOSE: To compare the diagnostic potential of MRI-based synthetic CT with conventional CT in the diagnosis of lumbar disc herniation. MATERIAL AND METHODS: After institutional review board approval, 19 patients who underwent conventional and synthetic CT imaging were enrolled in this prospective study. Synthetic CT images were generated from the MRI data using a U-Net. The two sets of images were compared and analyzed qualitatively by two musculoskeletal radiologists and rated on a 4-point scale for subjective quality. Agreement between the conventional and synthetic images for a diagnosis of lumbar disc herniation was determined independently using the kappa statistic. The diagnostic performance of conventional and synthetic CT images was evaluated in terms of sensitivity, specificity, and accuracy, with the consensus results based on T2-weighted imaging as the reference standard. RESULTS: Inter-reader and intra-reader agreement were moderate to substantial for all evaluated modalities (κ = 0.57-0.79 and 0.47-0.75, respectively). The sensitivity, specificity, and accuracy for detecting lumbar disc herniation were similar for synthetic and conventional CT images (synthetic vs. conventional, reader 1: sensitivity = 91% vs. 81%, specificity = 83% vs. 100%, accuracy = 87% vs. 91%; P < 0.001; reader 2: sensitivity = 84% vs. 81%, specificity = 85% vs. 98%, accuracy = 84% vs. 90%; P < 0.001). CONCLUSION: Synthetic CT images can be used in the diagnosis of lumbar disc herniation.
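The reader-agreement analysis above relies on Cohen's kappa, which discounts the agreement expected by chance. A self-contained sketch for binary calls (standard definition, not tied to this study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' binary calls (e.g. herniation yes/no):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    po = np.mean(r1 == r2)                          # observed agreement
    p_yes = np.mean(r1) * np.mean(r2)               # chance both say yes
    p_no = (1 - np.mean(r1)) * (1 - np.mean(r2))    # chance both say no
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

k = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])   # 0.5
```

Values around 0.4-0.6 are conventionally read as moderate agreement, 0.6-0.8 as substantial.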


Subjects
Intervertebral Disc Displacement, Humans, Intervertebral Disc Displacement/diagnostic imaging, Prospective Studies, Feasibility Studies, Lumbar Vertebrae/diagnostic imaging, X-Ray Computed Tomography/methods, Magnetic Resonance Imaging/methods
6.
Med Image Anal ; 91: 102984, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37837690

ABSTRACT

The accurate delineation of organs-at-risk (OARs) is a crucial step in radiotherapy treatment planning, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and error-prone, particularly for low-contrast soft tissue. Deep learning-based algorithms surpass traditional methods but require large datasets, and obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance segmentation performance, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing, but these conventional techniques cannot generate realistic deformations, limiting accuracy gains. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic augmentations to CT images from a limited patient cohort, our method significantly improves fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance on numerous OAR segmentation tasks. This approach holds considerable potential as a powerful tool for medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
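A statistical deformation model of the kind described is typically built by PCA over example displacement fields: new, plausible deformations are the mean field plus weighted principal modes. A minimal numpy sketch under that assumption (the paper's exact construction may differ; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training displacement fields, one flattened field per subject
fields = rng.normal(size=(6, 20))        # (n_subjects, n_points)

# statistical deformation model: mean field + principal modes via SVD
mean = fields.mean(axis=0)
_, s, vt = np.linalg.svd(fields - mean, full_matrices=False)
modes = vt[:3]                           # top-3 deformation modes
std = s[:3] / np.sqrt(len(fields) - 1)   # per-mode standard deviation

def sample_deformation(coeffs):
    """New, plausible deformation = mean + weighted sum of modes;
    coeffs are in units of per-mode standard deviations."""
    return mean + (np.asarray(coeffs) * std) @ modes

aug = sample_deformation([1.0, -0.5, 0.2])
```

Warping a training image with each sampled field yields a new, anatomically plausible training example.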


Subjects
Artificial Intelligence, Neoplasms, Humans, Algorithms, Neck, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Computer-Assisted Radiotherapy Planning/methods
7.
Med Image Anal ; 91: 102998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857066

ABSTRACT

Radiotherapy is a pivotal treatment modality for malignant tumors, but its accuracy is significantly compromised by respiration-induced changes in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-based volumetric tumor tracking method that uses single-angle X-ray projection images. Intraoperative two-dimensional (2D) X-ray images are aligned with the pre-treatment three-dimensional (3D) planning computed tomography (CT) scans, enabling extraction of the 3D tumor position and segmentation. Before therapy, a patient-specific tumor tracking model is built, leveraging a hybrid data augmentation, style correction, and registration network to map single-angle 2D X-ray images to the corresponding 3D tumors. During treatment, real-time X-ray images are fed into the trained model to produce the 3D tumor position. Validation on patient lung data and lung phantoms confirms high localization precision at lowered radiation doses, a promising step towards more precise radiotherapy.


Subjects
Deep Learning, Neoplasms, Humans, Three-Dimensional Imaging/methods, X-Rays, X-Ray Computed Tomography/methods, Neoplasms/diagnostic imaging, Neoplasms/radiotherapy, Cone-Beam Computed Tomography/methods
8.
Bioengineering (Basel) ; 10(11)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38002438

ABSTRACT

The detection of coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, we present ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections from ultra-low-dose X-ray images. The study used a multinational, multicenter dataset of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. Model effectiveness was evaluated with the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, ULTRA-X-COVID demonstrated performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases from ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for the diagnosis of various other diseases.

9.
Phys Med Biol ; 68(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-37844603

ABSTRACT

Objective. Medical image registration is a fundamental challenge in medical image processing. CT-CBCT registration in particular has significant implications for image-guided radiation therapy (IGRT). However, traditional iterative methods often require considerable computational time, and deep learning-based methods, especially on low-contrast organs, frequently become trapped in local optima. Approach. To address these limitations, we introduce a registration method based on volumetric feature point integration with bio-structure-informed guidance. A surface point cloud is generated from segmentation labels during training, with the surface-registered point pairs and the voxel feature point pairs jointly guiding the training process, thereby achieving higher registration accuracy. Main results. Our findings were validated on paired CT-CBCT datasets. Compared with other deep learning registration methods, our approach improved precision by 6%, reaching state-of-the-art performance. Significance. Integrating voxel feature points and bio-structure feature points to guide the training of a medical image registration network has achieved promising results, providing a meaningful direction for further research in medical image registration and IGRT.


Subjects
Cone-Beam Computed Tomography, Image-Guided Radiotherapy, Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing/methods, Image-Guided Radiotherapy/methods, Algorithms
10.
Phys Med Biol ; 68(20)2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37714184

ABSTRACT

Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors; these artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style transformation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. We simulate ring artifacts on real CT data to generate uncorrected CT (uCT) data and transform the rings into strip artifacts. The DCLGAN synthesis network is then applied in the polar coordinate system to remove the strip artifacts and generate a synthetic CT (sCT). Comparing the uCT and sCT images yields a residual image, which is filtered to extract the strip artifacts; an inverse polar transformation recovers the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. We tested the approach on real CT data, simulated data, and cone-beam CT images of patients' brains. The corrected CT images showed a reduction in mean absolute error of 12.36 Hounsfield units (HU), a decrease in root mean square error of 18.94 HU, an increase in peak signal-to-noise ratio of 3.53 decibels (dB), and an improvement in structural similarity index of 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts while preserving image details, making it a valuable tool for CT imaging.
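The key trick above is that a polar-coordinate transform turns concentric ring artifacts into straight stripes, which are much easier to isolate and filter. A toy nearest-neighbour resampling illustrating this (not the paper's code; names are illustrative):

```python
import numpy as np

def to_polar(img, n_r, n_theta):
    """Nearest-neighbour Cartesian-to-polar resampling: concentric rings
    around the image centre become horizontal stripes (one row per radius)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# image with a single bright ring at radius ~10
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 31.5, xx - 31.5)
ring = ((r > 9) & (r < 11)).astype(float)
polar = to_polar(ring, n_r=32, n_theta=90)   # ring becomes one bright row
```

In polar space the stripe is constant along the angle axis, so a 1D filter along that axis (or a strip-removal network, as in the paper) can extract it; the inverse transform then recovers the ring to subtract.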

11.
Comput Biol Med ; 165: 107377, 2023 10.
Article in English | MEDLINE | ID: mdl-37651766

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is widely utilized in modern radiotherapy; however, CBCT images exhibit increased scatter artifacts compared with planning CT (pCT), compromising image quality and limiting further applications. Scatter correction is thus crucial for improving CBCT image quality. METHODS: In this study, we propose an unsupervised contrastive learning method for CBCT scatter correction. We first transform low-quality CBCT into high-quality synthetic pCT (spCT) and generate forward projections of both. The difference between these projections is a residual image containing image details and scatter artifacts: the details are mainly high-frequency signals, while scatter is mainly low-frequency. We extract the scatter projection signal by applying a low-pass filter to remove the image details, subtract it from the original CBCT projection to obtain the corrected CBCT (cCBCT) projection, and finally reconstruct the cCBCT image with the FDK algorithm. RESULTS: To evaluate cCBCT image quality, we aligned the CBCT and pCT of six patients. Compared with CBCT, cCBCT maintains anatomical consistency and significantly improves CT number accuracy, spatial homogeneity, and artifact suppression. The mean absolute error (MAE) of the test data decreased from 88.0623 ± 26.6700 HU to 17.5086 ± 3.1785 HU. The MAE of fat regions of interest (ROIs) declined from 370.2980 ± 64.9730 HU to 8.5149 ± 1.8265 HU, and the error between their maximum and minimum CT numbers decreased from 572.7528 HU to 132.4648 HU. The MAE of muscle ROIs fell from 354.7689 ± 25.0139 HU to 16.4475 ± 3.6812 HU. Our method also outperformed several conventional unsupervised synthetic image generation techniques.
CONCLUSIONS: Our approach effectively enhances CBCT image quality and shows promising potential for future clinical adoption.
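The projection-domain correction described in METHODS can be sketched in 1D: the CBCT-vs-spCT residual carries detail (high-frequency) plus scatter (low-frequency), and a low-pass filter keeps only the scatter estimate to subtract. A numpy toy under those assumptions (a moving-average filter stands in for whatever low-pass filter the paper uses):

```python
import numpy as np

def lowpass(x, k=15):
    """Moving-average low-pass filter (scatter is low-frequency)."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def scatter_correct(cbct_proj, spct_proj):
    """Estimate scatter as the low-frequency part of the
    CBCT-vs-synthetic-pCT projection residual, then subtract it."""
    residual = cbct_proj - spct_proj   # details + scatter
    scatter = lowpass(residual)        # keep only the smooth part
    return cbct_proj - scatter

x = np.linspace(0, 2 * np.pi, 200)
true = np.sin(5 * x)                   # scatter-free projection detail
scatter = 0.8 * np.ones_like(x)        # smooth scatter background
corrected = scatter_correct(true + scatter, true)
```

Away from the boundaries the constant scatter is removed exactly, while the high-frequency detail passes through untouched.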


Subjects
Algorithms, Cone-Beam Computed Tomography, Humans, Cone-Beam Computed Tomography/methods, Artifacts, Computer-Assisted Image Processing/methods, Imaging Phantoms, Radiation Scattering
12.
Comput Biol Med ; 161: 106888, 2023 07.
Article in English | MEDLINE | ID: mdl-37244146

ABSTRACT

X-ray computed tomography (CT) plays a vitally important role in clinical diagnosis, but radiation exposure also carries a cancer risk for patients. Sparse-view CT reduces this impact through sparsely sampled projections. However, images reconstructed from sparse-view sinograms often suffer from severe streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, the sparse projections are reconstructed with the filtered back-projection algorithm. Next, the reconstructed results are fed into the deep network for artifact correction. More specifically, we integrate an attention-gating module into the U-Net pipeline, which implicitly learns to emphasize features relevant to a given task while suppressing background regions. Attention combines the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To improve performance, we fused a pre-trained ResNet50 model into our architecture. The model was trained and tested on the dataset from The Cancer Imaging Archive (TCIA), which consists of images of various human organs obtained from multiple views. The experiments demonstrate that the developed method is highly effective in removing streaking artifacts while preserving structural details. Quantitative evaluation also shows significant improvement in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) over other methods, with an average PSNR of 33.9538, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified on the 2016 AAPM dataset.
This approach therefore holds great promise for achieving high-quality sparse-view CT images.
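The attention-gating idea, a learned per-position gate that scales skip-connection features using a coarse global vector, can be sketched with plain numpy (an additive-attention toy with random weights, not the trained network; all names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(local_feats, global_vec, w_l, w_g, psi):
    """Additive attention gate: combine local features (n, c) with a
    global feature vector (c,), producing a per-position gate in (0, 1)
    that rescales the local features before the skip connection."""
    att = np.tanh(local_feats @ w_l + global_vec @ w_g)   # (n, k)
    alpha = sigmoid(att @ psi)                            # (n,) gate
    return local_feats * alpha[:, None], alpha

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 4))           # local features at 6 positions
gated, alpha = attention_gate(feats, rng.normal(size=4),
                              rng.normal(size=(4, 3)),
                              rng.normal(size=(4, 3)),
                              rng.normal(size=3))
```

During training the gate learns to pass regions that matter for artifact correction and attenuate background, which is the "implicit emphasis" described above.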


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Algorithms, Artifacts
13.
Article in English | MEDLINE | ID: mdl-37021898

ABSTRACT

Precise classification of histopathological images is crucial to computer-aided diagnosis in clinical practice. Magnification-based learning networks have attracted considerable attention for their ability to improve performance in histopathological classification, but the fusion of pyramids of histopathological images at different magnifications remains under-explored. In this paper, we propose a novel deep multi-magnification similarity learning (DSML) approach that aids interpretation of a multi-magnification learning framework and makes feature representations easy to visualize from low dimension (e.g., cell level) to high dimension (e.g., tissue level), overcoming the difficulty of understanding cross-magnification information propagation. It uses a similarity cross-entropy loss function to simultaneously learn the similarity of information across magnifications. To verify the effectiveness of DSML, experiments with different network backbones and magnification combinations were designed, and its interpretability was investigated through visualization. Our experiments were performed on two histopathological datasets: a clinical nasopharyngeal carcinoma dataset and the public BCSS2021 breast cancer dataset. The results show that our method achieved outstanding classification performance, with higher area under the curve, accuracy, and F-score than comparable methods. The reasons behind the effectiveness of multi-magnification learning are also discussed.

14.
IEEE J Biomed Health Inform ; 27(7): 3258-3269, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099476

ABSTRACT

Anatomical resection (AR) based on anatomical sub-regions is a promising method of precise surgical resection, proven to improve long-term survival by reducing local recurrence. Fine-grained segmentation of an organ's surgical anatomy (FGS-OSA), i.e., segmenting an organ into multiple anatomic regions, is critical for localizing tumors in AR surgical planning. However, obtaining FGS-OSA results automatically with computer-aided methods faces several challenges: appearance ambiguities among sub-regions caused by similar HU distributions in different sub-regions of an organ's surgical anatomy, invisible boundaries, and similarities between anatomical landmarks and other anatomical information. In this paper, we propose a novel fine-grained segmentation framework termed the "anatomic relation reasoning graph convolutional network" (ARR-GCN), which incorporates prior anatomic relations into framework learning. In ARR-GCN, a graph is constructed on the sub-regions to model the classes and their relations, and a sub-region center module is designed to obtain discriminative initial node representations in graph space. Most importantly, to explicitly learn the anatomic relations, the prior relations among sub-regions are encoded as an adjacency matrix and embedded into the intermediate node representations to guide framework learning. ARR-GCN was validated on two FGS-OSA tasks: i) liver segment segmentation and ii) lung lobe segmentation. On both tasks it outperformed other state-of-the-art segmentation methods and yielded promising performance in suppressing ambiguities among sub-regions.
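The core ARR-GCN mechanism, mixing sub-region node features through a prior-anatomy adjacency matrix before a linear map, reduces to a standard graph-convolution step. A toy numpy sketch (mean aggregation with self-loops is an assumption; the paper's normalization may differ):

```python
import numpy as np

def gcn_layer(h, adj, w):
    """One graph-convolution step: the prior anatomic relations enter as
    an adjacency matrix that mixes sub-region node features before the
    linear map (row-normalized mean aggregation, self-loops added)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-connections
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_norm @ h @ w, 0.0)        # ReLU activation

# 3 sub-regions; regions 0 and 1 are anatomically adjacent
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
h = np.array([[1., 0.], [0., 1.], [4., 4.]])      # initial node features
out = gcn_layer(h, adj, np.eye(2))
```

Adjacent sub-regions end up with blended representations (rows 0 and 1 each become the mean of the pair), while the isolated region keeps its own features, which is how prior relations shape the learned node embeddings.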


Subjects
Liver, Humans, Liver/anatomy & histology, Liver/diagnostic imaging, Liver/surgery, Neoplasms
15.
IEEE Trans Med Imaging ; 42(5): 1495-1508, 2023 05.
Article in English | MEDLINE | ID: mdl-37015393

ABSTRACT

A novel method is proposed to obtain four-dimensional (4D) cone-beam computed tomography (CBCT) images from a routine scan in patients with upper abdominal cancer. The projections are sorted according to the location of the lung diaphragm before being reconstructed into phase-sorted data. A multiscale-discriminator generative adversarial network (MSD-GAN) is proposed to alleviate the severe streaking artifacts in the original images; it is trained on simulated CBCT datasets derived from patient planning CT images. The enhanced images are then used to estimate the deformation vector field (DVF) among breathing phases with a deformable image registration method, and the estimated DVF is applied in a motion-compensated ordered-subset simultaneous algebraic reconstruction approach to generate the 4D CBCT images. The MSD-GAN is compared with U-Net on image-enhancement performance. Results of simulation and patient studies show that, in 4D reconstruction quality, the proposed method significantly outperforms both the total variation regularization-based iterative reconstruction approach and the variant using only MSD-GAN to enhance the phase-sorted images, and that the MSD-GAN is more accurate than U-Net. The proposed method enables practical 4D-CBCT imaging from a single routine scan in upper abdominal cancer treatment, including liver and pancreatic tumors.


Subjects
Cone-Beam Computed Tomography, Deep Learning, Image Enhancement, Neoplasms, Cone-Beam Computed Tomography/methods, Datasets as Topic, Neoplasms/diagnostic imaging
16.
BMC Med Inform Decis Mak ; 23(1): 64, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37024893

ABSTRACT

BACKGROUND: Breast cancer (BC) is one of the most common cancers among women. Since diverse features can be collected, how to stably select powerful ones for accurate BC diagnosis remains challenging. METHODS: A hybrid framework is designed to successively investigate both feature ranking (FR) stability and cancer diagnosis effectiveness. Specifically, on 4 BC datasets (BCDR-F03, WDBC, GSE10810, and GSE15852), the stability of 23 FR algorithms is evaluated via an advanced estimator (S), and the predictive power of the stable feature ranks is further tested with different machine learning classifiers. RESULTS: Experiments identify 3 algorithms achieving good stability ([Formula: see text]) on the four datasets, with the generalized Fisher score (GFS) leading to state-of-the-art performance. Moreover, the GFS ranks suggest that shape features are crucial in BC image analysis (BCDR-F03 and WDBC) and that a few genes can well differentiate benign and malignant tumor cases (GSE10810 and GSE15852). CONCLUSIONS: The proposed framework recognizes a stable FR algorithm for accurate BC diagnosis. Stable and effective features could deepen the understanding of BC diagnosis and related decision-making applications.
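Feature-ranking stability of the kind estimated here is commonly measured as the mean pairwise rank correlation across resampled runs. A sketch using Spearman correlation (one common estimator; the paper's estimator S may be defined differently, and all names are illustrative):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation via Pearson on ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def ranking_stability(rank_lists):
    """Stability of a feature-ranking algorithm: mean pairwise Spearman
    correlation of the rankings it produces on resampled data."""
    n = len(rank_lists)
    pairs = [spearman(rank_lists[i], rank_lists[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

# a ranker that returns identical feature scores on every resample
s = ranking_stability([[3, 1, 2, 0]] * 3)   # perfect stability
```

A value near 1 means the algorithm orders features the same way regardless of the data sample, which is the property the framework screens for before testing predictive power.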


Subjects
Breast Neoplasms, Female, Humans, Breast Neoplasms/diagnosis, Algorithms, Machine Learning
17.
Comput Biol Med ; 158: 106875, 2023 05.
Article in English | MEDLINE | ID: mdl-37058759

ABSTRACT

Glioma is a heterogeneous disease that requires classification into subtypes with similar clinical phenotypes, prognosis, or treatment responses. Metabolic-protein interactions (MPI) can provide meaningful insights into cancer heterogeneity, and the potential of lipids and lactate for identifying prognostic subtypes of glioma remains relatively unexplored. We therefore propose a method that constructs an MPI relationship matrix (MPIRM) based on a triple-layer network (Tri-MPN) combined with mRNA expression, and processes the MPIRM with deep learning to identify glioma prognostic subtypes. Subtypes with significant differences in prognosis were detected in glioma (p-value < 2e-16, 95% CI), and these subtypes showed strong correlations in immune infiltration, mutational signatures, and pathway signatures. This study demonstrates the effectiveness of node interactions in MPI networks for understanding the heterogeneity of glioma prognosis.


Subjects
Deep Learning, Glioma, Humans, Gene Expression Profiling/methods, Glioma/genetics, Glioma/metabolism
18.
Front Oncol ; 13: 1127866, 2023.
Article in English | MEDLINE | ID: mdl-36910636

ABSTRACT

Objective: To develop a contrastive learning-based generative (CLG) model to generate high-quality synthetic computed tomography (sCT) from low-quality cone-beam CT (CBCT) and thereby improve the performance of deformable image registration (DIR). Methods: This study included 100 patients after breast-conserving surgery, with pCT images, CBCT images, and physician-delineated target contours. sCT images were generated from the CBCT images via the proposed CLG model. We used the sCT images as the fixed images instead of the CBCT images to achieve accurate multi-modality image registration. The deformation vector field was applied to propagate the target contour from the pCT to the CBCT, realizing automatic target segmentation on CBCT images. We calculated the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations to evaluate the proposed method. Results: The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. The proposed method outperformed the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), especially for soft-tissue targets such as the tumor bed region. Conclusion: The proposed CLG model can create high-quality sCT from low-quality CBCT and improve the performance of DIR between CBCT and pCT. Target segmentation accuracy is better than with traditional DIR.
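The Dice similarity coefficient used to score the propagated contours has a compact definition: twice the overlap divided by the total size of the two masks. A self-contained sketch (standard definition, not tied to this study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gt = np.zeros((8, 8), bool)
gt[2:6, 2:6] = True            # 16-pixel reference contour
pred = np.zeros((8, 8), bool)
pred[3:7, 2:6] = True          # prediction shifted down by 1 pixel
d = dice(pred, gt)             # 2 * 12 / (16 + 16) = 0.75
```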

19.
J Appl Clin Med Phys ; 24(7): e13942, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36867441

ABSTRACT

BACKGROUND: Intensity-modulated radiation therapy (IMRT) has been the standard of care for many types of tumors. However, treatment planning for IMRT is a time-consuming and labor-intensive process. PURPOSE: To alleviate this tedious planning process, a novel deep learning-based dose prediction algorithm (TrDosePred) was developed for head and neck cancers. METHODS: The proposed TrDosePred, which generates the dose distribution from a contoured CT image, is a U-shaped network built from a convolutional patch embedding and several local self-attention-based transformers. Data augmentation and an ensemble approach were used for further improvement. It was trained on the dataset from the Open Knowledge-Based Planning Challenge (OpenKBP). The performance of TrDosePred was evaluated with the two mean absolute error (MAE)-based scores used by the OpenKBP challenge (i.e., dose score and DVH score) and compared with the top three approaches of the challenge. In addition, several state-of-the-art methods were implemented and compared with TrDosePred. RESULTS: The TrDosePred ensemble achieved a dose score of 2.426 Gy and a DVH score of 1.592 Gy on the test dataset, ranking 3rd and 9th, respectively, on the CodaLab leaderboard as of this writing. In terms of DVH metrics, the average relative MAE against the clinical plans was 2.25% for targets and 2.17% for organs at risk. CONCLUSIONS: A transformer-based framework, TrDosePred, was developed for dose prediction. The results showed performance comparable or superior to previous state-of-the-art approaches, demonstrating the potential of transformers to boost treatment planning procedures.


Subjects
Deep Learning, Head and Neck Neoplasms, Intensity-Modulated Radiotherapy, Humans, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning/methods, Head and Neck Neoplasms/radiotherapy, Algorithms, Organs at Risk
20.
Bioengineering (Basel) ; 10(2)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36829638

ABSTRACT

Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested it on lungs (with and without tumors) and on phantom data. The results show that the Dice coefficient and normalized cross-correlation are greater than 0.97 and 0.92, respectively, and that the registration time is less than 1.2 seconds. In addition, the proposed model can track lung tumors, highlighting its clinical potential.
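Normalized cross-correlation, one of the two metrics reported above, is the inner product of the zero-mean, unit-variance images, which makes it invariant to affine intensity changes. A numpy sketch (standard definition, not the authors' code):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images: standardize each
    to zero mean and unit variance, then average the elementwise product.
    1.0 for identical images, -1.0 for inverted ones."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 16))
same = ncc(img, img)                  # identical images
scaled = ncc(img, 3.0 * img + 5.0)    # affine intensity change: same score
```

The affine invariance is why NCC is a common similarity measure for registration, where the two images may differ in brightness and contrast but share structure.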
